444 research outputs found

    Capital goods imports and investments in Latin America in the mid 1920s

    The assessment of Latin American long-term economic performance urgently needs more data to meet the demands of growth analysts. We present a systematic comparison of capital goods imports for 20 Latin American countries in 1925, relying on the foreign trade data of both the importing countries and the major exporting countries, the industrialized economies of the time. The quality of the foreign trade figures is tested; a homogeneous estimate of imported capital goods is derived; and its per capita ranking is discussed, shedding new light on Latin American development levels before import substitution.
    Keywords: Latin America, capital goods, imports, investment, foreign trade, economic development

    Big business in Spain (1917-1974): A reply to a critical note [La gran empresa en España (1917-1974). Réplica a una nota crítica]


    Simple semi-supervised dependency parsing

    We present a simple and effective semi-supervised method for training dependency parsers. We focus on the problem of lexical representation, introducing features that incorporate word clusters derived from a large unannotated corpus. We demonstrate the effectiveness of the approach in a series of dependency parsing experiments on the Penn Treebank and Prague Dependency Treebank, and we show that the cluster-based features yield substantial gains in performance across a wide range of conditions. For example, in the case of English unlabeled second-order parsing, we improve from a baseline accuracy of 92.02% to 93.16%, and in the case of Czech unlabeled second-order parsing, we improve from a baseline accuracy of 86.13% to 87.13%. In addition, we demonstrate that our method also improves performance when small amounts of training data are available, and can roughly halve the amount of supervised data required to reach a desired level of performance.
    Peer reviewed. Postprint (author's final draft).
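The cluster-based lexical features described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `clusters` dictionary stands in for Brown-style bit-string clusters that would in practice be induced from a large unannotated corpus, and the prefix lengths are chosen for illustration only.

```python
# Toy word-cluster map: word -> Brown-style bit string.
# In practice this would be induced from a large unannotated corpus;
# the entries below are made up for illustration.
clusters = {"the": "0010", "dog": "110100", "barked": "111010"}

def cluster_features(words, cluster_map, prefixes=(4, 6)):
    """Augment each token's feature set with bit-string prefixes of its
    cluster, so a parser can share statistics across similar words."""
    feats = []
    for w in words:
        bits = cluster_map.get(w, "")
        f = {"word": w}
        for p in prefixes:
            # Short prefixes give coarse clusters, longer prefixes finer ones.
            f[f"cluster:{p}"] = bits[:p] if bits else "<unk>"
        feats.append(f)
    return feats

print(cluster_features(["the", "dog", "meowed"], clusters))
```

Using prefixes of several lengths yields features at multiple granularities, which is one way cluster features mitigate lexical sparsity for rare or unseen words.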

    A Projected Subgradient Method for Scalable Multi-Task Learning

    Recent approaches to multi-task learning have investigated the use of a variety of matrix norm regularization schemes for promoting feature sharing across tasks. In essence, these approaches aim at extending the l1 framework for sparse single-task approximation to the multi-task setting. In this paper we focus on the computational complexity of training a jointly regularized model and propose an optimization algorithm whose complexity is linear in the number of training examples and O(n log n) in the number n of parameters of the joint model. Our algorithm is based on casting jointly regularized loss minimization as a convex constrained optimization problem, for which we develop an efficient projected gradient algorithm. The main contribution of this paper is the derivation of a gradient projection method with l1,∞ constraints that can be performed efficiently and which comes with convergence-rate guarantees.
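The projected (sub)gradient loop this abstract describes can be sketched generically. Note this is not the paper's l1,∞ projection: as a simpler stand-in, the sketch uses the O(n log n) sorting-based Euclidean projection onto a plain l1 ball, and the toy least-squares problem, step size, and radius are made up for illustration.

```python
import numpy as np

def project_l1(v, z=1.0):
    """Euclidean projection of v onto the l1 ball of radius z,
    via the O(n log n) sorting-based method."""
    if np.abs(v).sum() <= z:
        return v
    u = np.sort(np.abs(v))[::-1]          # magnitudes, descending
    css = np.cumsum(u)
    ks = np.arange(1, len(u) + 1)
    rho = np.nonzero(u - (css - z) / ks > 0)[0][-1]
    theta = (css[rho] - z) / (rho + 1)    # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

# Toy least-squares problem; values are illustrative only.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])

w = np.zeros(2)
for _ in range(200):
    grad = X.T @ (X @ w - y) / len(y)      # (sub)gradient of the loss
    w = project_l1(w - 0.5 * grad, z=2.5)  # gradient step, then project

print(w, np.abs(w).sum())  # iterates stay inside the l1 ball
```

The same step/project structure applies when the feasible set is the l1,∞ ball of the paper; only the projection routine changes.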